Distance Learning in Discriminative Vector Quantization
Authors
Abstract
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods...

Similar resources
Learning Vector Quantization With Alternative Distance Criteria
An adaptive algorithm for training a Nearest Neighbour (NN) classifier is developed in this paper. This learning rule has some similarity to the well-known LVQ method, but uses the nearest centroid neighbourhood concept to estimate optimal locations of the codebook vectors. The aim of this approach is to improve the performance of the standard LVQ algorithms when using a very small code...
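As a rough illustration of the prototype update that these LVQ variants share, a minimal LVQ1-style step could look as follows. This is our own sketch, not code from any of the cited papers; all names are illustrative.

```python
import numpy as np

def lvq1_update(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1-style step: find the nearest prototype (the 'winner')
    and move it toward the sample x if their labels agree, away from
    x otherwise.  Distances are plain Euclidean, i.e. the
    isotropic-cluster assumption mentioned in the abstracts."""
    dists = np.linalg.norm(prototypes - x, axis=1)  # Euclidean distances
    k = int(np.argmin(dists))                       # winner index
    sign = 1.0 if proto_labels[k] == y else -1.0    # attract or repel
    prototypes[k] += sign * lr * (x - prototypes[k])
    return k
```

Repeating this step over a labelled training set adapts the codebook; the cited approaches differ mainly in how the winner (or a neighbourhood of winners) is selected and how the update is derived.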
Matrix adaptation in discriminative vector quantization
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers which are based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of t...
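The extensions referred to here typically replace the Euclidean distance with an adaptive quadratic form d(x, w) = (x − w)ᵀ Λ (x − w), where Λ = Ωᵀ Ω is learned from the data; the factorization through Ω keeps Λ positive semi-definite. A minimal sketch under this standard parametrization (not code from the paper):

```python
import numpy as np

def adaptive_distance(x, w, omega):
    """Quadratic distance d(x, w) = (x - w)^T Lambda (x - w) with
    Lambda = Omega^T Omega.  Writing it as the squared norm of
    Omega @ (x - w) guarantees the result is non-negative."""
    diff = omega @ (x - w)    # map the difference vector by Omega
    return float(diff @ diff)  # = (x - w)^T Omega^T Omega (x - w)
```

With Ω equal to the identity this reduces to the squared Euclidean distance; a non-trivial Ω rescales and rotates the space, so clusters no longer need to be isotropic.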
Generalized Learning Vector Quantization
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and ...
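The GLVQ cost function is usually written in terms of the relative distance difference μ(x) = (d⁺ − d⁻)/(d⁺ + d⁻), where d⁺ is the squared distance to the closest prototype with the correct label and d⁻ to the closest prototype with a wrong label; μ lies in (−1, 1) and is negative exactly when x is classified correctly. A minimal sketch of this per-sample term, taking the outer function f as the identity (the method allows any monotonically increasing f):

```python
import numpy as np

def glvq_cost(x, y, prototypes, proto_labels):
    """Per-sample GLVQ cost term mu = (d_plus - d_minus)/(d_plus + d_minus),
    with d_plus the squared distance to the closest same-label prototype
    and d_minus to the closest wrong-label prototype.  Steepest descent
    on the sum of these terms yields the GLVQ update rule."""
    d = np.sum((prototypes - x) ** 2, axis=1)  # squared Euclidean distances
    d_plus = d[proto_labels == y].min()
    d_minus = d[proto_labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)
```

Minimizing the sum of these terms pushes correctly classified samples further from the decision boundary, which is what distinguishes GLVQ from the heuristic LVQ updates.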
Habituation in Learning Vector Quantization
A modification of Kohonen's Learning Vector Quantization is proposed to handle hard cases of supervised learning with a rugged decision surface or asymmetries in the input data structure. Cell reference points (neurons) are forced to move close to the decision surface by successively omitting input data that do not find a neuron of the opposite class within a circle of shrinking radius. Th...
Journal
Journal title: Neural Computation
Year: 2009
ISSN: 0899-7667,1530-888X
DOI: 10.1162/neco.2009.10-08-892